Why Some Machines May Need Qualia and How They Can Have Them: Including a Demanding New Turing Test for Robot Philosophers

Author

  • Aaron Sloman
Abstract

This paper extends three decades of work arguing that instead of focusing only on (adult) human minds, we should study many kinds of minds, natural and artificial, and try to understand the space containing all of them, by studying what they do, how they do it, and how the natural ones can be emulated in synthetic minds. That requires: (a) understanding the sets of requirements that are met by different sorts of minds, i.e. the niches that they occupy, (b) understanding the space of possible designs, and (c) understanding the complex and varied relationships between requirements and designs. Attempts to model or explain any particular phenomenon, such as vision, emotion, learning, language use, or consciousness, lead to muddle and confusion unless they are placed in that broader context, in part because current ontologies for specifying and comparing designs are inconsistent and inadequate. A methodology for making progress is summarised, and a novel requirement is proposed for human-like philosophical robots, namely that a single generic design, in addition to meeting many other more familiar requirements, should be capable of developing different and opposed viewpoints regarding philosophical questions about consciousness and the so-called hard problem. No designs proposed so far come close.

Could We Be Discussing Bogus Concepts?

Many debates about consciousness appear to be endless because of conceptual confusions preventing clarity as to what the issues are and what does or does not count as progress. This makes it hard to decide what should go into a machine if it is to be described as ‘conscious’, or as ‘having qualia’. Triumphant demonstrations by some AI developers of machines with alleged competences (seeing, having emotions, learning, being autonomous, being conscious, having qualia, etc.) are regarded by others as proving nothing of interest because the systems do not satisfy their definitions or their requirements-specifications.[1] Moreover, alleged demonstrations of programs with philosophically problematic features such as free will, qualia, or phenomenal consciousness will be dismissed both by those researchers who deny that those phenomena can exist at all, even in humans, and by others who claim that the phenomena are definitionally related to being a product of evolution and that, therefore, by definition, no artificial working model can be relevant.

Most AI researchers in this area simply ignore all these issues, and assume that the definition they use for some key term is the right one (perhaps citing some authority such as a famous philosopher or psychologist to support that assumption, as if academics in those fields all agreed on definitions). They then proceed to implement something which they believe matches their definition. One result is researchers talking past each other, unawares. In doing so they often re-invent ideas that have been previously discussed at length by others, including theories that were refuted long ago!

Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

[1] Everyone who has not yet read the trenchant observations in (McDermott 1981) about claims made by AI researchers should do so now. The arguments apply not only to Symbolic AI, which was dominant at the time it was written, but to all approaches to AI.
Boden’s new historical survey (2006) should help to reduce such ignorance, but a radical change in education in the field is needed, to ensure that researchers know a lot more about the history of the subject and don’t all write as if the history had started a decade or two ago. (Many young AI researchers know only the literature recommended by their supervisors, because they transferred at PhD level from some other discipline and had no time to learn more than the minimum required for completing their thesis.)

Some of the diversity of assumptions regarding what ‘consciousness’ is and how ‘it’ should be explained can be revealed by trawling through the archives of the psyche-d discussion forum, http://listserv.uh.edu/archives/psyche-d.html, starting in 1993, showing how highly intelligent and well educated philosophers and scientists talk past one another. A list of controversies in cognitive systems research on the euCognition web site also helps to indicate the diversity of views in this general area: http://www.eucognition.org/wiki/ Unfortunately, many researchers are unaware that their assumptions are controversial. The rest of this paper discusses two main areas of confusion, namely unclarity of concepts used to specify problems and unclarity of concepts used in describing designs. A new (hard) test for progress in this area is proposed.

Some of the dangers and confusions in claims to have implemented some allegedly key notion of consciousness were pointed out in (Sloman & Chrisley 2003). For example, most people will say, if asked, that being asleep entails being unconscious. Yet many of those people, if asked on another occasion whether having a frightening nightmare involves being conscious, will answer ‘yes’: they believe you cannot be frightened of a lion chasing you without being conscious. Sleepwalking provides another example. It seems to be obviously true (a) that a sleepwalker who gets dressed, opens a shut door and then walks downstairs must have seen the clothes, the door-handle, and the treads on the staircase, (b) that anyone who sees things in the environment must be conscious, (c) that sleepwalkers are, by definition, asleep, and (d) that sleepwalkers are therefore unconscious.

The lack of clarity in such concepts also emerges in various debates that seem to be unresolvable, e.g. debates on: Which animals have phenomenal consciousness? At what stage does a human foetus or infant begin to have it? Can you be conscious of something without being conscious that you are conscious of it, and if so, is there an infinite regress? The existence of inconsistent or divergent intuitions suggests that the common, intuitive notion of consciousness has so many flaws that it is not fit to be used in formulating scientific questions or engineering goals, since it will never be clear whether the questions have been answered or whether the goals have been achieved. Attempts to avoid this unclarity by introducing new, precise definitions, e.g. distinguishing ‘phenomenal’ from ‘access’ consciousness, or talking about ‘what it is like to be something’ (Nagel 1981), all move within a circle of ill-defined notions, without clearly identifying some unique thing that has to be explained. (As I was finishing this paper the latest issue of the Journal of Consciousness Studies, Vol. 14, Nos. 9–10, 2007, arrived. The editor’s introduction makes some of these points.)
Understanding What Evolution Has Done

The inability of researchers to identify a single core concept to focus research on is not surprising, since natural minds (biological control systems), and their varying forms of consciousness, are products of millions of years of evolution in which myriad design options were explored, most of which are still not understood: we know only fragments of what we are, and different researchers (psychologists, neuroscientists, linguists, sociologists, biologists, philosophers, novelists, ...) know different fragments. They are like the proverbial blind men trying to say what an elephant is on the basis of feeling different parts of an elephant.[2] What we introspect may be as primitive in relation to what is really going on in our minds (our virtual machines, not our brains) as ancient perceptions of earth, air, fire and water were in relation to understanding the physical world. Neither the biological mechanisms that evolved for perceiving the physical environment nor those that evolved for perceiving what is going on in ourselves were designed to serve the purposes of scientific theorising and explaining, but rather to meet the requirements of everyday decision making, online control, and learning, although as the ‘everyday’ activities become more complex and more varied, and their goals more precise, those activities develop into the activities of science, partly by revealing the need to extend our ontologies.

Some will object that introspective beliefs are necessarily true, because you cannot be mistaken about how things seem to you (which is why they are sometimes thought to provide the foundations of all other knowledge). To cut a long story short, the incorrigibility of what you think you know or sense or remember, or of how things seem to you, is essentially a tautology with no consequences, like the tautology that no measuring instrument can give an incorrect reading of what its reading is. The voltmeter can get the voltage wrong, but it cannot be wrong about what it measures the voltage to be (a small code sketch below makes the point concrete). No great metaphysical truths flow from that triviality.

People who are puzzled about what consciousness is, what mechanisms make it possible, how it evolved, whether machines can have it, etc., can make progress if they replace questions referring to ‘it’ with a whole battery of questions referring to different capabilities that can occur in animals and machines with different designs. The result need not be some new deep concept corresponding to our pre-scientific notion of consciousness. It is more likely that we shall progress beyond thinking there is one important phenomenon to be explained. What needs to be explained is rarely evident at the start of a scientific investigation: it becomes clear only in the process of developing new concepts and explanatory theories, and developing new ways to check the implications of proposed theories. We did not know what electromagnetic phenomena were and then find explanatory theories: rather, the development of new theories and techniques led to new knowledge of what those theories were required to explain, as well as the development of new concepts to express both the empirical observations and the explanatory theories, and our growing ability to perform tests to check the predictions of the theories (Cohen 1962).

[2] Read the poem by John Godfrey Saxe here: http://www.wordinfo.info/words/index/info/view_unit/1
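The voltmeter analogy can be made concrete in a few lines of code. The sketch below is mine, not the paper’s, and the class and attribute names are invented for illustration. It shows why a first-order report about the world can be false while the instrument’s report of what its own reading is holds trivially, by construction:

    import random

    class Voltmeter:
        """A noisy instrument: its reading can misrepresent the world,
        but its report of what its reading is cannot be wrong."""

        def __init__(self, bias=0.3):
            self.bias = bias            # systematic measurement error
            self.last_reading = None

        def measure(self, true_voltage):
            # First-order claim about the world: can be false.
            self.last_reading = true_voltage + self.bias + random.gauss(0, 0.05)
            return self.last_reading

        def report_reading(self):
            # 'Introspective' claim, like "it seems to me that ...":
            # true by construction, not by any epistemic achievement.
            return self.last_reading

    meter = Voltmeter()
    reading = meter.measure(5.0)              # almost certainly not 5.0: fallible
    assert meter.report_reading() == reading  # can never fail: a tautology

The assert can never fail, but nothing of metaphysical interest follows from that: the guarantee is purely a matter of how report_reading is defined, just as the incorrigibility of ‘seemings’ is a matter of how such reports are defined.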
We now know of many more phenomena involving energy that need to be explained by theories of transformation and transmission of energy than were known to Newton. Likewise, new phenomena relating to consciousness also emerge from studies of hypnosis, drugs of various kinds, anaesthetic procedures, brain damage, the developing minds of young children, and studies of cognition in non-human animals. Different sorts of consciousness may be possible in a bacterium, a bee, a boa constrictor, a baboon, a human baby, a baseball fan, brain-damaged humans, and, of course, various kinds of robots. Instead of one key kind of ‘natural’ consciousness that needs to be explained, there are very many complete designs, each of which resulted from very many evolutionary design choices, and in some cases a combination of evolutionary decisions and developmental options (i.e. epigenesis; see Jablonka and Lamb (2005)). For example, what a human can be aware of soon after birth is not the same as what it can be aware of one, five, ten or fifty years later. Likewise, the consequences of awareness change.

Adopting the Design Stance

Although AI researchers attempting to study consciousness start from different, and often inconsistent, facets of a very complex collection of natural phenomena, they do try to adopt the design stance (Dennett 1978), which in principle can lead to new insights and new clarity. This involves specifying various functional designs for animals and robots and trying to define the states and processes of interest in terms of what sorts of things can happen when instances of such designs are working. Compare: different sorts of deadlock, or different sorts of external attack, can arise in different sorts of computer operating systems[3] (one such design-defined state is sketched in code below). The use of the design stance to clarify the notion of free will is illustrated in (Sloman 1992; Franklin 1995). The task is more complex for notions related to consciousness.

But there are serious obstacles. In order to make progress, we require, but currently lack, a good set of concepts for describing and comparing different sets of requirements and different designs: we need ontologies for requirements and designs, and for describing relations between requirements and designs when both are complex. Without such a conceptual framework we cannot expect to cope with the complex variety of biological designs and the even larger, because less constrained, space of possible artificial designs. Unfortunately, as shown below, different terms are used by different researchers to describe architectures, capabilities, and mechanisms, and often the same word is used with different interpretations.

[3] I call this a study of logical topography. Several logical geographies may be consistent with one logical topography. See http://www.cs.bham.ac.uk/research/projects/cogaff/misc/logicalgeography.html
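To illustrate how a state can be defined by reference to a design rather than to observable behaviour alone, here is a minimal sketch (mine, not the paper’s) of the classic two-lock deadlock, using Python’s standard threading module:

    import threading
    import time

    lock_a, lock_b = threading.Lock(), threading.Lock()

    def worker(first, second, name):
        with first:
            time.sleep(0.1)    # give the other thread time to take its first lock
            with second:       # each thread now waits for the other: deadlock
                print(name, "finished")   # never reached

    t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1"), daemon=True)
    t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "t2"), daemon=True)
    t1.start(); t2.start()
    t1.join(timeout=1.0)
    print("t1 deadlocked:", t1.is_alive())    # True

‘Deadlock’ here is not defined by any particular visible behaviour but by the design: a cycle of threads, each holding a resource another needs. The design stance proposes giving notions related to consciousness the same kind of design-based definition.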
Don’t All Running Programs Introspect?

McCarthy (1995) and Sloman (1978, ch 6) present reasons why various kinds of self-knowledge could be useful in a robot, but specifying a working design is another matter. Is there a clear distinction between systems with and without self-knowledge? The informal notion of self-awareness or self-consciousness is based on a product of evolution, namely the ability to introspect, which obviously exists in adult humans, and may exist in infants and in some other animals. How it develops in humans is not clear. Normal adult humans can notice and reflect on some of the contents of their own minds, for instance when they answer questions during an oculist’s examination, or when they report that they are bored, or hungry, or unable to tell the difference between two coloured patches, or that they did not realise they were angry.

Some consciousness researchers attempt to focus only on verbal reports or other explicit behaviours indicating the contents of consciousness, but hardly anyone nowadays thinks the label “consciousness” refers to such behaviours. Many (though not all) would agree that what you are conscious of when looking at swirling rapids or trees waving in the breeze cannot be fully reported in available verbal or non-verbal behaviours: available motor channels do not have sufficient bandwidth for that task. So most researchers have to fall back, whether explicitly or unwittingly, on the results of their own introspection to identify what they are talking about. We designers do not have that limitation, since we can derive theories about unobservable processes going on inside complex virtual machines from the way they have been designed. The design stance naturally leads to specifications that refer to internal mechanisms, states and processes (in virtual machines) that are not necessarily identifiable on the basis of externally observable behaviours. From the design standpoint, what ‘introspect’ means has to be specified in the context of a general ontology for describing architectures for organisms and robots: something we lack at present.

Many simple designs can be described as having simple forms of introspection, including systems with feedback control loops such as those presented in (Braitenberg 1984). Many simple control mechanisms compare signals and expectations and modify actions on the basis of that comparison. If learning is included, more permanent modifications result. Those mechanisms all include primitive sorts of introspection. AI problem-solvers, planners, and theorem-provers need to be able to tell whether they have reached a goal state, and if not, what possible internal actions are relevant to the current incomplete solution, so that one or more of them can be selected to expand the search for a complete solution. Pattern-driven rule-systems need information about which rules are applicable at any time and which bindings are possible for the variables in the rule-patterns. Even a simple conditional test in a program which checks whether the values in two registers are the same could be said to use introspection. And inputs to synapses in neural nets provide information about the states of other neurons. So any such system that goes beyond performing a rigidly pre-ordained sequence of actions must use introspection, and to that extent is self-conscious. That would make all non-trivial computer programs and all biological organisms self-conscious.

Clearly that is not what most designers mean by ‘introspection’ and ‘self-conscious’. Why not? The examples given use only transient self-information: after a decision has been reached or a selection made, the information used is no longer available. Enduring, explicit information is required if comparisons are to be made about what happens in the system at different times. Moreover, the examples all involve very ‘low-level’ particles of information.
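The transient, low-level self-information described above can be pinned down in a few lines. This is a sketch of mine, not a design from the paper; the function and variable names are invented. It shows a trivial feedback controller that compares a sensed signal with an expectation, uses the comparison once, and keeps no record of it:

    def control_step(sensed, target, gain=0.5):
        """One step of a trivial feedback controller.

        The comparison below uses information about the system's own
        state: a primitive, transient form of 'introspection'. The
        error is used once to select a correction, then discarded.
        """
        error = target - sensed    # self-information: how far off am I?
        return gain * error        # corrective action; 'error' is not stored

    # Drive a simulated state towards a set-point.
    state, target = 0.0, 10.0
    for _ in range(20):
        state += control_step(state, target)
    print(round(state, 2))   # close to 10.0, yet the system retains no
                             # enduring record of how it got there

On the permissive reading criticised in the text, even this loop ‘introspects’; what it lacks is enduring, explicit, re-usable records of its own states, usable for comparisons across times and tasks.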
For a system to know that it is working on a difficult problem, that its current reasoning processes or perceptual states are very different from past examples, or that it has not come closer to solving its problem, it would need ways of combining lots of detailed information and producing summary ‘high-level’ descriptions, using a meta-semantic ontology, that can be stored and re-used for different purposes. If it also needs to realise that something new has come up that is potentially more important than the task it is currently engaged in, it will need to be able to do different things concurrently, for instance performing one task while monitoring that process and comparing it with other processes. (Some examples relevant to learning to use numbers were given in chapter 8 of Sloman, 1978.) So non-trivial introspection involves: an architecture ...
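By contrast with the feedback loop above, non-trivial introspection as described here needs enduring, explicit, high-level records of internal states, inspected by a concurrent monitoring process. The following fragment is a speculative sketch of mine, not a design from the paper; all the names (Episode, MetaManager, and so on) are invented. It stores summary descriptions of a task process and uses them to form a higher-level judgement about that process:

    import time
    from dataclasses import dataclass, field

    @dataclass
    class Episode:
        """An enduring, explicit, re-usable record of an internal state,
        expressed in a (toy) meta-semantic ontology: it is *about* the
        system's own processing, not about the external world."""
        task: str
        progress: float    # 0.0..1.0, summarising much low-level detail
        stamp: float = field(default_factory=time.time)

    class MetaManager:
        """Monitors a task process and compares present with past."""

        def __init__(self):
            self.history: list[Episode] = []

        def record(self, task, progress):
            self.history.append(Episode(task, progress))

        def stuck(self, task, window=3):
            # High-level self-description: "I am not getting closer."
            recent = [e.progress for e in self.history if e.task == task][-window:]
            return len(recent) == window and max(recent) - min(recent) < 0.01

    mm = MetaManager()
    for p in (0.40, 0.401, 0.402):     # three near-identical progress snapshots
        mm.record("plan-route", p)
    if mm.stuck("plan-route"):
        print("meta-level: consider switching tasks")   # interrupt-like judgement

Unlike the controller above, the records here endure and can be compared across times, which is the minimum the text demands; a full architecture would also need genuinely concurrent monitoring and the ability to interrupt and redirect the ongoing task.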
